A genetic algorithm with deterministic mutation based on neural network learning

Author(s):  
Minoru Fukumi ◽  
Norio Akamatsu

2011 ◽
Vol 460-461 ◽  
pp. 26-31
Author(s):  
Cheng Yang ◽  
Qun Wu ◽  
Jian Feng Wu

A method of product innovation design was presented. Based on a product gene representation and an interactive genetic algorithm, product designs evolved into new schemes that satisfied customers. In the evolutionary process, an approximation model, built by neural network learning, was used to evaluate the fitness of candidate products. This method not only shortened the time taken by the evolutionary process, but also avoided the decline in evaluation quality caused by the user's mental fatigue, and ensured the accuracy of the solution.
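The loop described above can be sketched as follows. This is an illustrative toy only: the binary product-gene encoding, the population sizes, and the stand-in `surrogate_fitness` function (playing the role of the trained neural-network evaluator) are all assumptions, not the authors' setup.

```python
import random

random.seed(0)
GENE_LEN = 8       # length of a product gene (assumed encoding)
POP_SIZE = 20
GENERATIONS = 30
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # preference the surrogate has "learned"

def surrogate_fitness(gene):
    """Stand-in for the neural-network model trained on user evaluations."""
    return sum(1 for g, t in zip(gene, TARGET) if g == t)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENE_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank by the surrogate instead of asking the user each generation.
        pop.sort(key=surrogate_fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]            # elitist selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            pa, pb = random.sample(parents, 2)
            cut = random.randrange(1, GENE_LEN)  # one-point crossover
            child = pa[:cut] + pb[cut:]
            if random.random() < 0.1:            # occasional bit-flip mutation
                child[random.randrange(GENE_LEN)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate_fitness)

best = evolve()
print(surrogate_fitness(best))
```

In the paper's setting, the surrogate would be trained on the user's earlier interactive ratings, so the GA can run many generations without further user input.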


A genetic algorithm is proposed to prevent the local-minimum defect that arises when using the BP neural network learning algorithm. The genetic algorithm is first used to optimize the weights and thresholds of the BP neural network, and the obtained values are then used to initialize the network. The performance of the optimized network is estimated using simulation data. The results of numerical simulations show that the BP neural network optimized by the genetic algorithm can effectively eliminate the local-minimum defect, which readily occurs in the original BP neural network, and has the advantages of fast convergence and high accuracy.

Keywords: BP neural network; genetic algorithm; local minimum defect; optimization
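A minimal sketch of the two-stage idea described above, assuming a toy 2-2-1 sigmoid network on the XOR problem; the network size, GA parameters, and learning rate are illustrative choices, not the paper's:

```python
import math
import random

random.seed(1)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
N_W = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]),
         sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])]
    return sigmoid(w[6] * h[0] + w[7] * h[1] + w[8]), h

def mse(w):
    return sum((forward(w, x)[0] - y) ** 2 for x, y in DATA) / len(DATA)

# Stage 1: GA searches for good initial weights (fitness = low error).
pop = [[random.uniform(-2.0, 2.0) for _ in range(N_W)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=mse)
    survivors = pop[:20]                       # elitist selection
    children = []
    for _ in range(20):
        pa, pb = random.sample(survivors, 2)
        child = [a if random.random() < 0.5 else b for a, b in zip(pa, pb)]
        if random.random() < 0.5:              # Gaussian mutation
            child[random.randrange(N_W)] += random.gauss(0.0, 0.5)
        children.append(child)
    pop = survivors + children
w = min(pop, key=mse)

# Stage 2: plain BP (batch gradient descent) from the GA-found start.
lr = 1.0
for _ in range(2000):
    grad = [0.0] * N_W
    for x, y in DATA:
        o, h = forward(w, x)
        d_o = 2.0 * (o - y) * o * (1.0 - o)    # error signal at the output
        grad[6] += d_o * h[0]
        grad[7] += d_o * h[1]
        grad[8] += d_o
        for j, base in ((0, 0), (1, 3)):       # backpropagate to hidden units
            d_h = d_o * w[6 + j] * h[j] * (1.0 - h[j])
            grad[base] += d_h * x[0]
            grad[base + 1] += d_h * x[1]
            grad[base + 2] += d_h
    w = [wi - lr * g / len(DATA) for wi, g in zip(w, grad)]

print(round(mse(w), 4))
```

The point of the two stages is that the GA's global search supplies a starting point away from poor local minima, while BP's gradient descent supplies the fast local refinement.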


2011 ◽  
Vol 131 (11) ◽  
pp. 1889-1894
Author(s):  
Yuta Tsuchida ◽  
Michifumi Yoshioka

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
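As a concrete illustration of the binning estimator mentioned above, the following sketch discretizes continuous activations into equal-width bins and computes mutual information from empirical joint frequencies; the synthetic data and the bin count of 10 are assumptions for the example:

```python
import math
from collections import Counter

def bin_activations(acts, n_bins=10, lo=0.0, hi=1.0):
    """Map each activation vector to a tuple of equal-width bin indices."""
    width = (hi - lo) / n_bins
    return [tuple(min(int((a - lo) / width), n_bins - 1) for a in vec)
            for vec in acts]

def mutual_information(xs, ys):
    """I(X;Y) in bits, from the empirical joint distribution of paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Synthetic sanity check: activations that perfectly encode a balanced binary
# label carry exactly 1 bit; constant activations carry 0 bits.
labels = [0, 1] * 50
acts = [(0.1, 0.9) if y == 0 else (0.8, 0.2) for y in labels]
binned = bin_activations(acts)
print(mutual_information(binned, labels))        # 1.0 bit for a perfect code
print(mutual_information([(0,)] * 100, labels))  # 0.0 for constant activations
```

In an information plane analysis, estimates like these would be computed per hidden layer after every few training epochs and plotted against each other; the paper's point is that the bin choice itself shapes what the resulting curves show.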

